
Learning Common and Specific Features for RGB-D Semantic Segmentation with Deconvolutional Networks



Abstract

In this paper, we tackle the problem of RGB-D semantic segmentation of indoor images. We take advantage of deconvolutional networks, which can predict pixel-wise class labels, and develop a new structure for deconvolution of multiple modalities. We propose a novel feature transformation network to bridge the convolutional networks and deconvolutional networks. In the feature transformation network, we correlate the two modalities by discovering common features between them, as well as characterize each modality by discovering modality-specific features. With the common features, we not only closely correlate the two modalities, but also allow them to borrow features from each other to enhance the representation of shared information. With specific features, we capture the visual patterns that are only visible in one modality. The proposed network achieves competitive segmentation accuracy on NYU depth dataset V1 and V2.
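The common/specific decomposition described above can be illustrated with a minimal sketch. All layer shapes, weight names, and the alignment loss here are hypothetical stand-ins, not the paper's actual architecture: each modality's encoder features are passed through a "common" projection (which a training loss would pull toward the other modality's common projection) and a "specific" projection, and each decoder then consumes its own specific features plus the common features borrowed from both modalities.

```python
import random

random.seed(0)

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def rand_matrix(rows, cols):
    """Random Gaussian weight matrix (stand-in for a learned layer)."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

d_in, d_common, d_specific = 8, 4, 4  # hypothetical feature sizes

# Stand-ins for per-pixel encoder (convolutional network) features.
f_rgb = [random.gauss(0, 1) for _ in range(d_in)]
f_depth = [random.gauss(0, 1) for _ in range(d_in)]

# Hypothetical linear layers of the feature transformation network.
W_rgb_c, W_depth_c = rand_matrix(d_common, d_in), rand_matrix(d_common, d_in)
W_rgb_s, W_depth_s = rand_matrix(d_specific, d_in), rand_matrix(d_specific, d_in)

# Common features: a training objective would pull these two
# projections together, e.g. via a squared-distance alignment term.
c_rgb, c_depth = matvec(W_rgb_c, f_rgb), matvec(W_depth_c, f_depth)
align_loss = sum((a - b) ** 2 for a, b in zip(c_rgb, c_depth))

# Specific features: patterns visible in only one modality.
s_rgb, s_depth = matvec(W_rgb_s, f_rgb), matvec(W_depth_s, f_depth)

# "Borrowing": each deconvolutional decoder receives its own specific
# features concatenated with the common features of *both* modalities.
dec_in_rgb = s_rgb + c_rgb + c_depth
dec_in_depth = s_depth + c_depth + c_rgb

print(len(dec_in_rgb), len(dec_in_depth))
```

The concatenation in the last step is one plausible way to realize the "borrowing" of shared information; the paper's actual fusion mechanism may differ.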

